21 research outputs found

    Telerobotic radiation protection tasks in the Super Proton Synchrotron using mobile robots

    Get PDF
    Paper presented at the 19th International Conference on Informatics in Control, Automation and Robotics, 14-16 July 2022, Lisbon, Portugal. In this paper a complete robotic solution is presented that allows the teleoperation of the radiation survey in the Super Proton Synchrotron (SPS) accelerator at CERN. Firstly, an introduction to radiation protection is given. Subsequently, the execution of the radiation survey in person is described and the potential of robotic solutions for such missions is outlined. After a brief state of the art on the subject, the development of the robot base, including its component selection and design, is shown. Thereafter, the software implementation is explained. The test procedure of this project covers the most important requirements for a correct execution of the survey, as well as the operational steps and data treatment in detail. The results confirm the correct execution of the mission and show the advantages of the teleoperated robotic solution, such as the improved and unified measurement conditions. This robotic system will thus make it possible to significantly reduce the radiation dose received by radiation protection staff. For further development, the automation of this task is planned, which presupposes the gradual autonomization of the robotic system from assisting the user to self-reliant execution of the survey.
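The unified measurement conditions mentioned above can be pictured as each reading being tagged with a fixed measurement geometry and checked against an action level. A minimal sketch, with hypothetical names and an illustrative 0.5 µSv/h threshold (not actual CERN values):

```python
from dataclasses import dataclass

@dataclass
class SurveyPoint:
    location: str          # predefined measurement point in the tunnel
    dose_rate_usv_h: float # ambient dose rate read by the teleoperated sensor
    distance_m: float      # sensor-to-surface distance, held constant by the robot

def flag_points(points, limit_usv_h=0.5):
    """Return locations whose dose rate exceeds the action level."""
    return [p.location for p in points if p.dose_rate_usv_h > limit_usv_h]

survey = [
    SurveyPoint("SPS-421", 0.12, 0.4),
    SurveyPoint("SPS-422", 1.80, 0.4),
]
print(flag_points(survey))  # → ['SPS-422']
```

Keeping `distance_m` constant is what a human surveyor cannot guarantee between runs; the robot can, which is the source of the "unified measurement conditions" claimed in the abstract.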

    From 2D to 3D Mixed Reality Human-Robot Interface in Hazardous Robotic Interventions with the Use of Redundant Mobile Manipulator

    Get PDF
    Part of the conference ICINCO 2021: 18th International Conference on Informatics in Control, Automation and Robotics (July 2021). 3D Mixed Reality (MR) Human-Robot Interfaces (HRI) show promise for robotic operators to complete tasks more quickly, safely and with less training. The objective of this study is to assess the use of a 3D MR HRI environment in comparison with a standard 2D Graphical User Interface (GUI) for controlling a redundant mobile manipulator. The experimental data was taken during operation with a 9 DOF manipulator mounted on a robotized train, the CERN Train Inspection Monorail (TIM), used for the Beam Loss Monitor robotic measurement task in a complex hazardous intervention scenario at CERN. The operator's efficiency and workload with both types of interface were compared using the NASA TLX method. The usage of heart rate and Galvanic Skin Response parameters for operator condition and stress monitoring was tested. The results show that teleoperation with the 3D MR HRI mitigates cognitive fatigue and stress by improving the operator's understanding of both the robot's pose and the surrounding environment or scene.

    Prospects to apply machine learning to optimize the operation of the crystal collimation system at the LHC

    Get PDF
    Crystal collimation relies on the use of bent crystals to coherently deflect halo particles onto dedicated collimator absorbers. This scheme is planned to be used at the LHC to improve the betatron cleaning efficiency with high-intensity ion beams. Only particles with impinging angles below 2.5 µrad relative to the crystalline planes can be efficiently channeled at the LHC nominal top energy of 7 Z TeV. For this reason, crystals must be kept in optimal alignment with respect to the circulating beam envelope to maximize the efficiency of the channeling process. Given the small angular acceptance, achieving optimal channeling conditions is particularly challenging. Furthermore, the different phases of the LHC operational cycle involve important dynamic changes of the local orbit and optics, requiring an optimized control of position and angle of the crystals relative to the beam. To this end, the possibility to apply machine learning to the alignment of the crystals, in a dedicated setup and in standard operation, is considered. In this paper, possible solutions for automatic adaptation to the changing beam parameters are highlighted and plans for the LHC ion runs starting in 2022 are discussed.
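The alignment problem can be made concrete with a toy model: a halo particle is efficiently channeled only if its angle relative to the crystalline planes is within the ~2.5 µrad acceptance, so alignment amounts to maximising the channeled fraction over the crystal angle. The brute-force scan below is an illustrative stand-in for the machine-learning approach the paper considers, with all numbers invented for the example:

```python
import random

ACCEPTANCE_URAD = 2.5  # angular acceptance quoted for 7 Z TeV

def channeling_fraction(crystal_angle_urad, halo_angles_urad):
    """Fraction of halo particles within the angular acceptance."""
    return sum(
        abs(a - crystal_angle_urad) < ACCEPTANCE_URAD for a in halo_angles_urad
    ) / len(halo_angles_urad)

def align(halo_angles_urad, scan_range=(-50.0, 50.0), step=0.5):
    """Exhaustive angular scan; an ML agent would replace this loop."""
    best_angle, best_frac = scan_range[0], 0.0
    angle = scan_range[0]
    while angle <= scan_range[1]:
        frac = channeling_fraction(angle, halo_angles_urad)
        if frac > best_frac:
            best_angle, best_frac = angle, frac
        angle += step
    return best_angle

random.seed(0)
halo = [random.gauss(10.0, 1.0) for _ in range(1000)]  # toy halo centred at 10 µrad
print(align(halo))  # lands near the halo centre, ~10 µrad
```

An exhaustive scan is too slow and too disruptive during operation, which is precisely why the paper proposes learning-based adaptation to the changing orbit and optics instead.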

    REMOTE - Robotics activities at CERN - Tele-operation, control and haptics

    No full text
    Teleoperation and the control of robots can be achieved in many ways, depending on the hardware used, the communications link, the software architecture and the operators themselves. This talk will present the basic engineering principles of teleoperation before diving into the application of haptic feedback, or the sense of touch. Haptic feedback can provide better immersion for the operators, allowing them to achieve tasks more intuitively and with finer control, but it can also introduce instabilities and further complexity to the architecture. Different applications of haptic feedback will be presented, showing where it can be useful and where it can be a disadvantage as well. Common primary devices (that the operators move to control the robot) and secondary devices (the robots themselves) will be presented. Short bio: Eloise Matheson completed her Bachelor of Mechatronics Space Engineering and Bachelor of Science from the University of Sydney in 2010. After working as a systems engineer in Australia, she moved to Europe to undertake a Masters of Advanced Robotics from Warsaw University of Technology and Ecole Centrale de Nantes, which she finished in 2014. Focusing on teleoperation, Eloise then started working at the European Space Agency in their robotics group, particularly studying how haptic interactions could help astronauts control robots over far distances. In 2017 she started a PhD in surgical robotics for neurosurgery at Imperial College London, and after its completion she began working at CERN in 2020, where she is a mechatronics engineer within the Mechatronics, Robotics and Operations section.

    Human robot interface and control methods for steerable catheter procedures in neurosurgery

    No full text
    This thesis explores novel human-machine interface hardware, methods and algorithms for robotic assisted surgical needle steering, including the design and validation of a novel neurosurgical robotic platform which is part of the Enhanced Delivery Ecosystem for Neurosurgery in 2020 (EDEN2020) European Research and Innovation Action. At the heart of this project is the bio-inspired Programmable Bevel-tip needle (PBN), which is a soft, steerable, flexible, multi-segment catheter that is able to follow curvilinear 3D paths in soft tissue. An electro-mechanical design that addresses aspects of standards applicable to medical device design was completed and integrated in order to drive a 4-segment PBN. This system has been used as the basis for the research presented in this thesis, in understanding the optimal control modalities to move the catheter through soft tissue, and in designing an intuitive human machine interface that is appropriate for clinical use. The PBN design and actuation is inspired by how some insects can penetrate and steer through a medium in 3D with their ovipositor in order to lay eggs. The motion is achieved by simultaneously extending and retracting different sections of the ovipositor, and using the forces from the medium to push (or steer) the tip of a section in a particular direction. This reciprocating motion minimises the net pushing force, and in doing so, decreases the displacement and strain applied on the medium. This same technique is applied when actuating a 4-segment PBN, and low level control modalities can exploit the kinematics of the catheter in following clinically advantageous motion profiles that may reduce damage to the tissue along the insertion tract. The aim of this research is to optimise a controller that can simultaneously ensure accurate path following and target reaching performance for the catheter, while minimising the displacement, hence possible damage, to the surrounding tissue.
    A cyclic motion controller was compared against a direct push controller in order to understand the performance of the catheter when following a path and reaching a target over 3D trajectories. An expert user was able to achieve a target position error of 0.58 ± 0.68 mm for the direct controller, and 1.45 ± 1.41 mm for the cyclic controller, both clinically viable results. The difference in these metrics was found to be due to the cyclic controller under-steering, as it takes more time for the desired offset between the catheter segments to be reached compared to the direct push controller. As such, a hybrid controller that uses both cyclic and direct motion profiles during an insertion was implemented, and an expert user achieved a target position error of 0.7 ± 0.69 mm. In this way, the accuracy of the direct push controller can be maintained, while a patient can still benefit from the reduced tissue strain resulting from the cyclic profiles. As the system is designed for human-in-the-loop control, the Human Machine Interface (HMI) is vital in allowing users to intuitively, safely and accurately use the system. The HMI proposed here consists of the visual interface the surgeon is shown during the neurosurgical procedure and the haptic interface they interact with via a control joystick. Having designed and implemented appropriate hardware for the task, a further objective of this research was to assess the impact different interfaces have on the performance of the system, in order to identify the best combination of feedback via human studies trials. A standard neurosurgical visual interface depicts slices in the Axial, Coronal and Sagittal planes of the brain as well as a 3D volume based on preoperative imaging. It is easy for the surgeon to visualise the 2D path a needle would take through the brain, as they can extrapolate the needle tip position easily from one slice to the next.
    However, the same method does not work for 3D needle insertions. In this thesis, a novel visual interface was developed that combines both first person and third person viewpoints, so the surgeon can navigate with the catheter as if they were driving a vehicle, and still see the full brain volume perspective. Overlays are used to show the desired path the surgeon should follow, as well as waypoints for them to navigate through. The visual interface was evaluated by user trials in vitro and in simulation, and by ex vivo assessments during an ovine trial. The best ex vivo results had a target position error of 0.29 mm and an orientation error of 9.38 degrees. A haptic interface included in the HMI employs a 2 degree of freedom haptic joystick to render forces to the user's hand that can be used as navigational cues. A user study evaluated whether haptic feedback increased the user's performance; however, the results showed that haptic feedback did not significantly increase performance if visual feedback was also present. A discussion of the results and their implications has led to suggestions for future work.
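The hybrid controller described above can be sketched as a mode switch: reciprocating (cyclic) segment motion far from the target to limit tissue strain, falling back to a direct push near the target to recover accuracy. All names, the 4-segment layout command shape, and the 10 mm switching threshold are illustrative, not the thesis implementation:

```python
def segment_commands(distance_to_target_mm, step_mm=1.0, cycle=0,
                     switch_mm=10.0):
    """Per-segment advance commands for a 4-segment PBN (sketch)."""
    if distance_to_target_mm > switch_mm:
        # Cyclic profile: advance one segment while holding the others,
        # so the net pushing force on the tissue stays small.
        cmds = [0.0] * 4
        cmds[cycle % 4] = step_mm
        return cmds
    # Direct push: all segments advance together (steering offsets would
    # be added elsewhere), avoiding the cyclic mode's under-steer.
    return [step_mm] * 4

print(segment_commands(25.0, cycle=2))  # → [0.0, 0.0, 1.0, 0.0]
print(segment_commands(5.0))            # → [1.0, 1.0, 1.0, 1.0]
```

The switch point is the key tuning parameter: too late and the under-steer of the cyclic mode persists near the target; too early and the tissue-strain benefit is lost.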

    Enhanced Human–Robot Interface With Operator Physiological Parameters Monitoring and 3D Mixed Reality

    Get PDF
    Remote robotic interventions and maintenance tasks are frequently required in hazardous environments. Particularly, missions with a redundant mobile manipulator in the world’s most complex machine, the CERN Large Hadron Collider (LHC), are performed in a sensitive underground environment with radioactive or electromagnetic hazards, bringing further challenges in safety and reliability. The mission’s success depends on the robot’s hardware and software, and when the tasks become too unpredictable to execute autonomously, the operators need to make critical decisions. Still, in most current human-machine systems, the state of the human is neglected. In this context, a novel 3D Mixed Reality (MR) human-robot interface with the Operator Monitoring System (OMS) was developed to advance safety and task efficiency with improved spatial awareness, advanced manipulator control, and collision avoidance. However, new techniques could increase the system’s sophistication and add to the operator’s workload and stress. Therefore, for operational validation, the 3D MR interface had to be compared with an operational 2D interface, which has been used in hundreds of interventions. With the 3D MR interface, the execution of precise approach tasks was faster, with no increased workload or physiological response. The new 3D MR techniques improved the teleoperation quality and safety while maintaining similar effects on the operator. The OMS worked jointly with the interface and performed well with operators with varied teleoperation backgrounds facing a stressful real telerobotic scenario in the LHC. The paper contributes to the methodology for human-centred interface evaluation incorporating the user’s physiological state: heart rate, respiration rate and skin electrodermal activity, and combines it with the NASA TLX assessment method, questionnaires, and task execution time. 
It provides novel approaches to operator state identification, the GUI-OMS software architecture, and the eval..
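The NASA TLX assessment combined with the physiological measures above has a simple numeric core: six subscales rated 0-100 are weighted by pairwise-comparison counts (weights sum to 15) and averaged. A minimal sketch with illustrative ratings, not data from the study:

```python
TLX_SCALES = ("mental", "physical", "temporal", "performance",
              "effort", "frustration")

def tlx_score(ratings, weights):
    """Weighted NASA TLX workload score (0-100)."""
    assert set(ratings) == set(TLX_SCALES)
    assert sum(weights.values()) == 15  # 15 pairwise comparisons in total
    return sum(ratings[s] * weights[s] for s in TLX_SCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(round(tlx_score(ratings, weights), 2))  # → 54.33
```

Comparing this score across the 2D and 3D MR interfaces, alongside heart rate, respiration rate and electrodermal activity, is what lets the paper claim "no increased workload or physiological response".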

    Web based User Interface Solution for Remote Robotic Control and Monitoring Autonomous System

    No full text
    The area of robotic control and monitoring, or automated systems, covers a wide range of applications. The operating system, the kind of control, and the size of the screen used to present information to the user all vary in different robotic or industrial systems. This article proposes a system based on a user interface for real-time robotic control or monitoring of autonomous systems using web technologies built on open-source projects. The purpose of this software is to be highly scalable over time and easily pluggable into different types of robotic solutions and projects. It must offer a good user experience and an appealing modern UI design, allowing technicians who are not expert in robot operation to perform interventions or maintenance tasks. The web environment provides an ideal platform to ensure the portability of the application so that it can be released on a multitude of devices, including laptops, smartphones, and tablets. This article introduces and describes the modules, features, and advantages of the Neutron Framework. It presents how users can interact with it and how to integrate this solution within CERN's Mechatronics, Robotics and Operations solutions.

    Human–Robot Collaboration in Manufacturing Applications: A Review

    No full text
    This paper provides an overview of collaborative robotics towards manufacturing applications. Over the last decade, the market has seen the introduction of a new category of robots, collaborative robots (or "cobots"), designed to physically interact with humans in a shared environment, without the typical barriers or protective cages used in traditional robotics systems. Their potential is undisputed, especially regarding their flexible ability to make simple, quick, and cheap layout changes; however, it is necessary to have adequate knowledge of their correct uses and characteristics to obtain the advantages of this form of robotics, which can be a barrier for industry uptake. The paper starts with an introduction of human–robot collaboration, presenting the related standards and modes of operation. An extensive literature review of works published in this area is undertaken, with particular attention to the main industrial cases of application. The paper concludes with an analysis of the future trends in human–robot collaboration as determined by the authors.

    Robot-Aided Contactless Monitoring of Workers’ Cardiac Activity in Hazardous Environment

    No full text
    Vital signals monitoring is expected to support people in their daily activities in the near future, following continuous strides in developing health technologies for complex and hazardous environments. As the Industry 4.0 revolution grows, robotic systems are increasingly deployed to support health monitoring, even though current robotic systems merely enable navigation and exploration of the surrounding environment to find people to be monitored, without adapting their behaviour to improve the quality of the health monitoring. At the European Organization for Nuclear Research (CERN), a wireless personnel safety prototype has been developed to assist workers in harsh environments, although this system needs to be improved in terms of invasiveness and portability. This work proposes a novel contactless approach based on a robotic system that adapts its behaviour to improve the performance of an imaging-based algorithm for continuous physiological monitoring. Specifically, cardiac activity is monitored from camera views to obtain non-invasive and reliable vital parameter measurements. An extensive experiment has been conducted with ten healthy volunteers. The participants' heart rates were monitored at different distances and different camera zoom levels from the robotic system, and synchronously measured by a heart rate benchmark device for the ground truth. The results highlighted several distance-zoom combinations that can be reached by the robotic system to adapt its behaviour to the boundary conditions in order to minimize the heart rate measurement error during algorithm calculations. These distance-zoom combinations are used to implement the control of the robotic system, improve the heart rate calculation algorithm and overcome the limitations of previous systems.
    A comparison with previous works in the literature involving the cardiac activity algorithm reveals a consistent contribution of the robot-assisted contactless monitoring system in reducing the mean absolute error in heart rate estimation.
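The adaptation step described above reduces, at its core, to a lookup: given a calibration table of heart-rate mean absolute error per distance-zoom combination, the robot moves to the combination with the lowest error. The MAE values below are made up for the example, not measurements from the study:

```python
def best_setting(mae_bpm):
    """Pick the (distance_m, zoom) pair with the lowest MAE in bpm."""
    return min(mae_bpm, key=mae_bpm.get)

# Hypothetical calibration table from the volunteer experiment:
# (distance in metres, camera zoom level) -> heart-rate MAE in bpm.
mae_bpm = {
    (1.0, 1): 6.2,
    (1.0, 4): 2.1,
    (2.0, 4): 3.5,
    (2.0, 8): 2.8,
}
print(best_setting(mae_bpm))  # → (1.0, 4)
```

In practice the robot would pick the best combination it can actually reach given obstacles in the environment, which is what "adapting its behaviour to the boundary conditions" refers to.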

    Mixed Reality Human–Robot Interface With Adaptive Communications Congestion Control for the Teleoperation of Mobile Redundant Manipulators in Hazardous Environments

    No full text
    Robotic interventions with redundant mobile manipulators pose a challenge for telerobotics in hazardous environments, such as underwater, underground, nuclear facilities, particle accelerators, aerial or space. Communication issues can lead to critical consequences, such as imprecise manipulation resulting in collisions, breakdowns and mission failures. The research presented in this paper was driven by the needs of a real robotic intervention scenario in the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The goal of the work was to develop a framework for network optimisation in order to help facilitate Mixed Reality techniques such as 3D collision detection and avoidance, trajectory planning, real-time control, and automated target approach. The teleoperator was provided with immersive interactions while preserving precise positioning of the robot. These techniques had to be adapted to delays, bandwidth limitation and their volatility in the 4G shared network of the real underground particle accelerator environment. A novel application-layer congestion control with automatic settings was applied for video and point cloud feedback. Twelve automatic setting modes were proposed, with algorithms based on the camera frame rate, resolution, point cloud subsampling, network round-trip time and throughput-to-bandwidth ratio. Each mode was thoroughly characterized to present its specific use-case scenarios and the improvements it brings to the adaptive camera feedback control in teleoperation. Finally, a framework was presented according to which designers can optimize their Human-Robot Interfaces and sensor feedback depending on the network characteristics and task.
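The mode selection described above can be sketched as stepping camera quality down as the round-trip time and the throughput-to-bandwidth ratio degrade. The thresholds, quality levels and three-mode ladder below are invented for illustration; the paper defines twelve automatic setting modes:

```python
FRAME_RATES = (30, 15, 5)                             # fps, highest first
RESOLUTIONS = ((1280, 720), (640, 360), (320, 180))   # matching resolutions

def select_mode(rtt_ms, throughput_ratio):
    """Return (fps, resolution) for the current network conditions.

    throughput_ratio is the current throughput divided by the estimated
    available bandwidth; values near 1.0 mean the link is saturated.
    """
    if rtt_ms < 50 and throughput_ratio < 0.5:
        level = 0   # healthy link: full quality
    elif rtt_ms < 200 and throughput_ratio < 0.8:
        level = 1   # congested: reduce the load
    else:
        level = 2   # saturated: minimum usable feedback
    return FRAME_RATES[level], RESOLUTIONS[level]

print(select_mode(30, 0.4))   # → (30, (1280, 720))
print(select_mode(350, 0.9))  # → (5, (320, 180))
```

Doing this at the application layer, rather than relying on transport-layer congestion control alone, lets the interface trade video quality for latency explicitly, which matters when the feedback drives real-time manipulator control.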